Expert Evaluation

Introduction

Expert evaluation is a quick way to gain an understanding of a product or service. Gathering and analysing the findings in a structured way improves the validity of the study by reducing personal bias. One way to do this is to use multiple evaluators. Another is to agree up front on the guidelines, principles and 'heuristics' to use.

When to use expert evaluations

Expert evaluations are a good way to:

General workflow

This whole process can be done in a few hours or a few days; it does not need to be long or difficult!

The general workflow for an expert evaluation goes something like this:

  1. There is one leading evaluator who runs the study and spends a few days setting it up, analyzing the results and compiling the report.
  2. The leading evaluator makes it easy for others to participate: they set up the environments, use flows, heuristics, instructions and a reporting template for the other evaluators, and tell them how much time they are expected to spend, usually 2-3 hours.
  3. It is usually a good idea to involve not just designers but also developers and people with varying backgrounds; with the chosen principles / heuristics they can take part too. If you have specialists, such as accessibility or visual design specialists, encourage them to look at the service from their own background. (Repeat the request to use the same Excel template for reporting; people usually don't want to use it.) The evaluators are asked not to discuss the issues with each other.
  4. After a day or two, the analysis is done. This is best done in a workshop where everyone goes through their own findings and the group then agrees on which are the real issues and what the severity of each issue is. The quick-and-dirty version is that the leading evaluator collects the findings into one template, organizes them by keyword / section and analyses them as 'material' for the report (see the sketch after this list).
  5. The leading evaluator either facilitates the report creation or writes the report themselves (and validates the findings with the team).
  6. And, THIS IS THE IMPORTANT PART: when the report and analysis are done, the lead evaluator often has a nagging feeling that many of the small problems are actually related to each other, and that this is why the report was hard to create. This happens on almost every project if you have the time to marinate in the findings: typically there are just a few bigger problems, each caused by a combination of smaller problems. THAT is your main finding.
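
To make step 4 concrete, here is a minimal quick-and-dirty analysis sketch. It assumes each evaluator exported the shared template as a CSV file named findings_<name>.csv, with hypothetical columns 'section', 'keyword', 'severity' and 'description'; the real column names and severity scale are whatever you agreed on in your template.

    import csv
    import glob
    from collections import defaultdict

    # Collect every evaluator's exported findings (the file naming is an assumption).
    findings = []
    for path in glob.glob("findings_*.csv"):
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                row["evaluator"] = path
                findings.append(row)

    # Group by section and keyword so related small problems end up side by side.
    grouped = defaultdict(list)
    for row in findings:
        grouped[(row["section"], row["keyword"])].append(row)

    # Biggest groups first - these are the candidates for the few bigger problems
    # that are caused by a combination of smaller ones.
    for (section, keyword), rows in sorted(grouped.items(), key=lambda kv: -len(kv[1])):
        print(f"{section} / {keyword}: {len(rows)} findings")
        for row in rows:
            print(f"  [{row['severity']}] {row['description']} ({row['evaluator']})")

The point is not automation; sorting and filtering the same columns in the Excel template does the same job. The value is seeing everyone's findings grouped side by side before the analysis workshop.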

Reporting

I've found the following reporting format useful; a small sketch for generating a matching slide skeleton follows the outline.

  1. 1-page executive summary & top findings
  2. Study Background
    • Methods used
    • Services and environments studied
    • Workflows / value streams studied
    • Points of view included / study limitations
    • Grading / severity scale used for the findings
  3. Main findings
    • There is one main finding per slide.
    • The slide title is a sentence that describes the problem or related group of problems.
    • The slide body usually has one big annotated image.
    • If applicable and not immediately obvious, an individual recommendation for fixing the problem is included.
  4. Main recommendations
    • Often the individual recommendations are mutually exclusive or contradict each other.
    • In this section, have 2-3 slides of recommended actions, thought through as one single, coherent recommendation.
  5. Next steps
    • For discussion and agreement on what to do next
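
If the report is delivered as a slide deck, the skeleton can be generated from the outline above so the leading evaluator only needs to drop in the findings. A minimal sketch, assuming PowerPoint output and the third-party python-pptx library (any presentation tool works just as well):

    from pptx import Presentation  # third-party: pip install python-pptx

    SECTIONS = [
        "1-page executive summary & top findings",
        "Study background",
        "Main findings",            # duplicate this slide, one per main finding
        "Main recommendations",
        "Next steps",
    ]

    prs = Presentation()
    layout = prs.slide_layouts[1]   # built-in "Title and Content" layout

    for title in SECTIONS:
        slide = prs.slides.add_slide(layout)
        slide.shapes.title.text = title

    prs.save("expert_evaluation_report_skeleton.pptx")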

Best practices

Resources and recommended reading
